Similar documents (20 results)
1.
In many phase III clinical trials, it is desirable to separately assess the treatment effect on two or more primary endpoints. Consider the MERIT-HF study, where two endpoints of primary interest were time to death and the earliest of time to first hospitalization or death (The International Steering Committee on Behalf of the MERIT-HF Study Group, 1997, American Journal of Cardiology 80[9B], 54J-58J). It is possible that treatment has no effect on death but a beneficial effect on first hospitalization time, or it has a detrimental effect on death but no effect on hospitalization. A good clinical trial design should permit early stopping as soon as the treatment effect on both endpoints becomes clear. Previous work in this area has not resolved how to stop the study early when one or more endpoints have no treatment effect or how to assess and control the many possible error rates for concluding wrong hypotheses. In this article, we develop a general methodology for group sequential clinical trials with multiple primary endpoints. This method uses a global alpha-spending function to control the overall type I error and a multiple decision rule to control error rates for concluding wrong alternative hypotheses. The method is demonstrated with two simulated examples based on the MERIT-HF study.
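The paper's global alpha-spending function is not reproduced in the abstract. As a rough illustration of how an alpha-spending function allocates type I error across interim looks, the sketch below implements the familiar Lan-DeMets O'Brien-Fleming-type spending function; the significance level and the three-look schedule are arbitrary choices, not values from the study.

```python
from statistics import NormalDist

def obf_spending(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type alpha-spending function:
    cumulative type I error spent at information fraction t in (0, 1]."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    return 2 * (1 - nd.cdf(z / t ** 0.5))

# Cumulative alpha spent at three equally spaced interim looks:
# very little is spent early, and the full alpha is reached at t = 1.
looks = [1 / 3, 2 / 3, 1.0]
spent = [obf_spending(t) for t in looks]
```

The per-look rejection boundaries would then be derived from the increments `spent[k] - spent[k-1]`; controlling the additional error rates for wrong alternative hypotheses, as the paper does, requires machinery beyond this sketch.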

2.
It has been known ever since relatively few structures had been solved that longer protein chains often contain multiple domains, which may fold separately and play the role of reusable functional modules found in many contexts. In many structural biology tasks, in particular structure prediction, it is of great use to be able to identify domains within the structure and analyze these regions separately. However, when using sequence data alone this task has proven exceptionally difficult, with relatively little improvement over the naive method of choosing boundaries based on size distributions of observed domains. The recent significant improvement in contact prediction provides a new source of information for domain prediction. We test several methods for using this information, including a kernel smoothing-based approach and methods based on building alpha-carbon models, and compare performance with a length-based predictor, a homology search method, and four published sequence-based predictors: DOMCUT, DomPRO, DLP-SVM, and SCOOBY-DOmain. We show that the kernel-smoothing method is significantly better than the other ab initio predictors when both single-domain and multidomain targets are considered and is not significantly different from the homology-based method. Considering only multidomain targets, the kernel-smoothing method outperforms all of the published methods except DLP-SVM. The kernel-smoothing method therefore represents a potentially useful improvement to ab initio domain prediction. Proteins 2013. © 2012 Wiley Periodicals, Inc.
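The abstract does not give the scoring details, but the general idea of kernel-smoothing a contact-derived boundary signal can be sketched as follows. The score and the toy 20-residue "protein" below are hypothetical: each candidate cut point is scored by how many predicted contacts would span it, the profile is Gaussian-smoothed, and the interior minimum is taken as the domain boundary.

```python
import math

def boundary_scores(contacts, n):
    """score[k] = number of predicted contacts (i, j), i < j, that span
    a domain cut placed between residues k-1 and k."""
    score = [0] * n
    for i, j in contacts:
        for k in range(i + 1, j + 1):
            score[k] += 1
    return score

def gaussian_smooth(xs, bw=3.0):
    """Kernel-smooth a score profile with a Gaussian of bandwidth bw."""
    n = len(xs)
    out = []
    for k in range(n):
        w = [math.exp(-0.5 * ((k - m) / bw) ** 2) for m in range(n)]
        out.append(sum(wi * xi for wi, xi in zip(w, xs)) / sum(w))
    return out

# Toy two-domain chain of 20 residues: short-range contacts only
# within residues 0-9 and within residues 10-19, so the true cut is at 10.
contacts = [(i, i + 3) for i in range(7)] + [(i, i + 3) for i in range(10, 17)]
smoothed = gaussian_smooth(boundary_scores(contacts, 20))
cut = min(range(4, 17), key=lambda k: smoothed[k])
```

Real predicted contact maps are noisy and probabilistic, which is why smoothing (and the restriction to interior cut points) matters more than in this clean toy example.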

3.
Colorectal cancer is the second leading cause of cancer-related deaths in the United States, with more than 130,000 new cases of colorectal cancer diagnosed each year. Clinical studies have shown that genetic alterations lead to different responses to the same treatment, despite the morphologic similarities of tumors. A molecular test prior to treatment could help in determining an optimal treatment for a patient with regard to both toxicity and efficacy. This article introduces a statistical method appropriate for predicting and comparing multiple endpoints given different treatment options and molecular profiles of an individual. A latent variable-based multivariate regression model with structured variance-covariance matrix is considered here. The latent variables account for the correlated nature of multiple endpoints and accommodate the fact that some clinical endpoints are categorical variables and others are censored variables. The mixture normal hierarchical structure admits a natural variable selection rule. Inference was conducted by sampling from the posterior distribution using Markov chain Monte Carlo methods. We analyzed the finite-sample properties of the proposed method using simulation studies. The application to the advanced colorectal cancer study revealed associations between multiple endpoints and particular biomarkers, demonstrating the potential of individualizing treatment based on genetic profiles.

4.
Assigning functions to unknown proteins is one of the most important problems in proteomics. Several approaches have used protein-protein interaction data to predict protein functions. We previously developed a Markov random field (MRF) based method to infer a protein's functions using protein-protein interaction data and the functional annotations of its protein interaction partners. In the original model, only direct interactions were considered and each function was considered separately. In this study, we develop a new model which extends direct interactions to all neighboring proteins, and one function to multiple functions. The goal is to understand a protein's function based on information on all the neighboring proteins in the interaction network. We first developed a novel kernel logistic regression (KLR) method based on diffusion kernels for protein interaction networks. The diffusion kernels provide means to incorporate all neighbors of proteins in the network. Second, we identified a set of functions that are highly correlated with the function of interest, referred to as the correlated functions, using the chi-square test. Third, the correlated functions were incorporated into our new KLR model. Fourth, we extended our model by incorporating multiple biological data sources such as protein domains, protein complexes, and gene expressions by converting them into networks. We showed that the KLR approach of incorporating all protein neighbors significantly improved the accuracy of protein function predictions over the MRF model. The incorporation of multiple data sets also improved prediction accuracy. The prediction accuracy is comparable to another protein function classifier based on the support vector machine (SVM), using a diffusion kernel. The advantages of the KLR model include its simplicity as well as its ability to explore the contribution of neighbors to the functions of proteins of interest.
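The diffusion kernel that underlies the KLR step is K = exp(-βL), where L = D - A is the graph Laplacian of the interaction network. The sketch below computes it for a tiny hypothetical four-protein chain via a truncated Taylor series (the KLR regression itself, and the choice β = 0.5, are not taken from the paper).

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def diffusion_kernel(adj, beta=0.5, terms=40):
    """K = exp(-beta * L), L = D - A, via a truncated Taylor series."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    M = [[beta * (adj[i][j] - (deg[i] if i == j else 0)) for j in range(n)]
         for i in range(n)]                               # M = -beta * L
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # M^0 / 0!
    K = [row[:] for row in P]
    for k in range(1, terms):
        P = [[x / k for x in row] for row in matmul(P, M)]  # M^k / k!
        K = [[K[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return K

# Hypothetical path network of four interacting proteins: 0 - 1 - 2 - 3
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
K = diffusion_kernel(adj)
```

Because L has zero row sums, each row of K sums to one, and K[i][j] can be read as the amount of "function label" diffusing from protein i to protein j — which is what lets all neighbors, not just direct partners, contribute.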

5.
The aim of this study was the development of an in vitro bioassay that combines several endpoints of general cytotoxicity for the screening of compounds or mixtures of compounds with potential bioactivity. The Alamar Blue assay was employed to assess metabolic activity, the Neutral Red assay was used for the assessment of membrane function and lysosomal activity, and the lactate dehydrogenase leakage assay was employed for the assessment of membrane integrity. Each assay was performed separately and in combination using a human fibroblast cell line (MRC-5). Three fungal secondary metabolites of different chemistry that affect different cellular targets were tested as model compounds: deoxynivalenol, enniatin B1, and 2-amino-14,16-dimethyloctadecan-3-ol. The inhibitory concentrations obtained for the assays performed separately and in combination were not significantly different (P < 0.05, n = 9). The combination of several cytotoxicity endpoints in a single assay increases the chance that potential bioactive/cytotoxic compounds are discovered during the screening of mixtures of natural compounds (e.g., extracts from fungal cultures or plants) when one endpoint fails and, at the same time, might give some basic information on the cellular target.

6.
Pulmonary arterial hypertension (PAH) is a serious complication of systemic sclerosis (SSc). In clinical trials PAH-SSc has been grouped with other forms, including idiopathic PAH. The primary endpoint for most pivotal studies was improvement in exercise capacity. However, composite clinical endpoints that better reflect long-term outcome may be more meaningful. We discuss potential endpoints and consider why the same measures may not be appropriate for both idiopathic PAH and PAH-SSc due to inherent differences in clinical outcome and management strategies of these two forms of PAH. Failure to take this into account may compromise progress in managing PAH in SSc.

7.
Many assessment instruments used in the evaluation of toxicity, safety, pain, or disease progression consider multiple ordinal endpoints to fully capture the presence and severity of treatment effects. Contingency tables underlying these correlated responses are often sparse and imbalanced, rendering asymptotic results unreliable or model fitting prohibitively complex without overly simplistic assumptions on the marginal and joint distribution. Instead of a modeling approach, we look at stochastic order and marginal inhomogeneity as an expression or manifestation of a treatment effect under much weaker assumptions. Often, endpoints are grouped together into physiological domains or by the body function they describe. We derive tests based on these subgroups, which might supplement or replace the individual endpoint analysis because they are more powerful. The permutation or bootstrap distribution is used throughout to obtain global, subgroup, and individual significance levels as they naturally incorporate the correlation among endpoints. We provide a theorem that establishes a connection between marginal homogeneity and the stronger exchangeability assumption under the permutation approach. Multiplicity adjustments for the individual endpoints are obtained via stepdown procedures, while subgroup significance levels are adjusted via the full closed testing procedure. The proposed methodology is illustrated using a collection of 25 correlated ordinal endpoints, grouped into six domains, to evaluate toxicity of a chemical compound.
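The domain-level tests, exchangeability theorem, and stepdown adjustments are beyond a short sketch, but the basic permutation p-value for a single ordinal endpoint can be illustrated as follows. The severity grades and group sizes are hypothetical, and the test statistic (difference in mean score) is one simple choice among many.

```python
import random

def permutation_pvalue(x, y, n_perm=2000, seed=7):
    """One-sided permutation test: is the mean ordinal score in y
    larger than in x?  Returns a Monte Carlo p-value."""
    rng = random.Random(seed)
    obs = sum(y) / len(y) - sum(x) / len(x)
    pooled = x + y
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # re-assign group labels
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if sum(ys) / len(ys) - sum(xs) / len(xs) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)              # add-one Monte Carlo p

control = [0, 0, 1, 1, 0, 1, 0, 0]   # hypothetical severity grades 0-3
treated = [2, 3, 2, 1, 3, 2, 2, 3]
p = permutation_pvalue(control, treated)
```

Because permuting labels preserves the joint distribution across endpoints, the same resampling scheme extends naturally to subgroup statistics (e.g., a max-statistic over the endpoints in a domain), which is the property the paper's global and subgroup tests exploit.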

8.
There is growing concern over the welfare of animals used in research, in particular when these animals develop pathology. The present study aims to identify the main sources of animal distress and to assess the possible implementation of refinement measures in experimental infection research, using mouse models of tuberculosis (TB) as a case study. This choice is based on the historical relevance of mouse studies in understanding the disease and the present and long-standing impact of TB on a global scale. Literature published between 1997 and 2009 was analysed, focusing on the welfare impact on the animals used and the implementation of refinement measures to reduce this impact. In this 12-year period, we observed a rise in reports of ethical approval of experiments. The proportion of studies classified into the most severe category did not, however, change significantly over the studied period. Information on important research parameters, such as the method of euthanasia or the sex of the animals, was absent in a substantial number of papers. Overall, this study shows that progress has been made in the application of humane endpoints in TB research, but that a considerable potential for improvement remains.

9.
Soil microbial toxicity tests are seldom used in ecological risk assessments or in the development of regulatory criteria in the U.S. The primary reason is the lack of an explicit connection between these tests and assessment endpoints. Soil microorganisms have three potential roles with respect to ecological assessment endpoints: properties of microbial communities may be endpoints; microbial responses may be used to estimate effects on plant production; and microbial responses may be used as surrogates for responses of higher organisms. Rates of microbial processes are important to ecosystem function, and thus should be valued by regulatory agencies. However, the definition of the microbial assessment endpoint is often an impediment to its use in risk assessment. Decreases in rates are not always undesirable. Processes in a nutrient cycle are particularly difficult to define as endpoints, because what constitutes an adverse effect on a process is dependent on the rates of others. Microbial tests may be used as evidence in an assessment of plant production, but the dependence of plants on microbial processes is rarely considered. As assessment endpoints are better defined in the future, microbial ecologists and toxicologists should be provided with more direction for developing appropriate microbial tests.

10.
Visual fields measured with standard automated perimetry are a benchmark test for determining retinal function in ocular pathologies such as glaucoma. Their monitoring over time is crucial in detecting change in disease course and, therefore, in prompting clinical intervention and defining endpoints in clinical trials of new therapies. However, conventional change detection methods do not take into account non-stationary measurement variability or spatial correlation present in these measures. An inferential statistical model, denoted ‘Analysis with Non-Stationary Weibull Error Regression and Spatial enhancement’ (ANSWERS), was proposed. In contrast to commonly used ordinary linear regression models, which assume normally distributed errors, ANSWERS incorporates non-stationary variability modelled as a mixture of Weibull distributions. Spatial correlation of measurements was also included into the model using a Bayesian framework. It was evaluated using a large dataset of visual field measurements acquired from electronic health records, and was compared with other widely used methods for detecting deterioration in retinal function. ANSWERS was able to detect deterioration significantly earlier than conventional methods, at matched false positive rates. Statistical sensitivity in detecting deterioration was also significantly better, especially in short time series. Furthermore, the spatial correlation utilised in ANSWERS was shown to improve the ability to detect deterioration, compared to equivalent models without spatial correlation, especially in short follow-up series. ANSWERS is a new efficient method for detecting changes in retinal function. It allows for better detection of change, more efficient endpoints and can potentially shorten the time in clinical trials for new therapies.  
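ANSWERS itself (Weibull mixture errors plus Bayesian spatial correlation) is not reproducible from the abstract, but the conventional baseline it is compared against — ordinary least-squares regression of sensitivity against time at each visual-field location — can be sketched. The series below are hypothetical illustrations, not study data.

```python
def ols_slope(t, y):
    """Least-squares slope of sensitivity y (dB) against time t (years):
    the conventional pointwise trend analysis ANSWERS is compared against."""
    n = len(t)
    tb, yb = sum(t) / n, sum(y) / n
    sxy = sum((ti - tb) * (yi - yb) for ti, yi in zip(t, y))
    sxx = sum((ti - tb) ** 2 for ti in t)
    return sxy / sxx

# Hypothetical series for one visual-field location, tested twice a year
years = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
stable = [30.1, 29.8, 30.3, 29.9, 30.2, 30.0]
worsening = [30.0, 28.9, 27.7, 26.8, 25.9, 24.8]
```

The limitation the paper targets is visible even here: with normally distributed errors assumed, a short series with the heavy-tailed, non-stationary noise typical of perimetry can easily mask a slope like the one in `worsening`.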

11.
12.
In clinical trials with time‐to‐event outcomes, it is of interest to predict when a prespecified number of events can be reached. Interim analysis is conducted to estimate the underlying survival function. When another correlated time‐to‐event endpoint is available, both outcome variables can be used to improve estimation efficiency. In this paper, we propose to use the convolution of two time‐to‐event variables to estimate the survival function of interest. Propositions and examples are provided based on exponential models that accommodate possible change points. We further propose a new estimation equation about the expected time that exploits the relationship of two endpoints. Simulations and the analysis of real data show that the proposed methods with bivariate information yield significant improvement in prediction over that of the univariate method.
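For the simplest exponential case (no change points), the convolution of two independent exponential endpoints has a closed-form survival function, which the sketch below states and checks by Monte Carlo. The rates and horizon are arbitrary illustration values, not from the paper.

```python
import math
import random

def hypoexp_survival(t, l1, l2):
    """P(X + Y > t) for independent exponential variables with distinct
    rates l1 and l2: the survival function of their convolution."""
    return (l2 * math.exp(-l1 * t) - l1 * math.exp(-l2 * t)) / (l2 - l1)

# Monte Carlo check of the closed form at a hypothetical horizon t0
rng = random.Random(0)
l1, l2, t0 = 1.0, 2.0, 1.5
sims = [rng.expovariate(l1) + rng.expovariate(l2) for _ in range(100000)]
mc = sum(s > t0 for s in sims) / len(sims)
```

In the paper's setting the convolution identity is what lets information from the correlated second endpoint sharpen the estimate of the survival function of interest; the change-point extension replaces the single rates with piecewise-constant ones.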

13.
An important aim in clinical studies in oncology is to study how treatment and prognostic factors influence the course of disease of a patient. Typically in these trials, besides overall survival, also other endpoints such as locoregional recurrence or distant metastasis are of interest. Most commonly in these situations, Cox regression models are applied for each of these endpoints separately or to composite endpoints such as disease-free survival. These approaches however fail to give insight into what happens to a patient after a first event. We re-analyzed data of 2795 patients from a breast cancer trial (EORTC 10854) by applying a multi-state model, with local recurrence, distant metastasis, and both local recurrence and distant metastasis as transient states and death as absorbing state. We used an approach where the clock is reset on entry of a new state. The influence of prognostic factors on each of the transition rates is studied, as well as the influence of the time at which intermediate events occur. The estimated transition rates between the states in the model are used to obtain predictions for patients with a given history. Formulas are developed and illustrated for these prediction probabilities for the clock reset approach.
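A minimal simulation can illustrate how a clock-reset multi-state model turns transition rates into prediction probabilities. The sketch below uses a reduced state space (healthy → recurrence → death) and entirely hypothetical rates; with exponential sojourn times the clock-reset and clock-forward formulations coincide, so this is only a caricature of the paper's model.

```python
import random

rng = random.Random(42)

# Hypothetical yearly transition rates; the sojourn clock restarts at 0
# on entry into each state (clock-reset / semi-Markov convention).
rates = {
    "healthy": {"recurrence": 0.10, "death": 0.02},
    "recurrence": {"death": 0.25},
}

def simulate(horizon=5.0):
    """Return the state occupied at the horizon (or 'death' if absorbed)."""
    state, t = "healthy", 0.0
    while state != "death":
        outs = rates[state]
        total = sum(outs.values())
        t += rng.expovariate(total)         # sojourn time in current state
        if t > horizon:
            return state                    # still alive at the horizon
        u = rng.random() * total            # pick destination by rate
        for nxt, r in outs.items():
            u -= r
            if u <= 0:
                state = nxt
                break
    return "death"

outcomes = [simulate() for _ in range(50000)]
p_death = outcomes.count("death") / len(outcomes)
```

The analytic five-year death probability for these rates is about 0.249; the paper derives such prediction probabilities in closed form, conditional on a patient's history, rather than by simulation.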

14.
Assessment of cytotoxicity by impedance spectroscopy
This paper describes a simple and convenient method to monitor on-line cell adhesion by electrical impedance measurements. Immortalized mouse fibroblasts, BALB/3T3, were cultured onto interdigitated electrode structures integrated into the bottom of an in-house fabricated device. Impedance modulus, phase, real and imaginary parts were considered separately and plotted as a function of frequency and time to better understand and select the component giving more information on cell adhesion changes. For cytotoxicity assessment, the cells were treated with different concentrations of sodium arsenite used as a model toxicant and their responses were monitored on-line. The half-inhibition concentration (the concentration required to achieve 50% inhibition) derived from the measurements falls between the results obtained using the standard 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide test and the colony forming efficiency assay, confirming the good sensitivity of the system. In terms of the impedance signal, the modulus was found to be the most sensitive of the considered components for cytotoxicity testing of chemicals.
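The half-inhibition concentration can be estimated from a dose-response curve by interpolating on log-concentration between the two doses that bracket 50% response. The sketch below uses entirely hypothetical concentrations and viability values, not the paper's impedance data.

```python
import math

def ic50(conc, viability):
    """Half-inhibition concentration by linear interpolation of response
    against log10(concentration); conc must be in increasing order and
    viability (as % of untreated control) decreasing.  Returns None if
    the response never crosses 50%."""
    points = list(zip(conc, viability))
    for (c1, v1), (c2, v2) in zip(points, points[1:]):
        if v1 >= 50 >= v2:
            x1, x2 = math.log10(c1), math.log10(c2)
            x = x1 + (v1 - 50) * (x2 - x1) / (v1 - v2)
            return 10 ** x
    return None

conc = [0.1, 1.0, 10.0, 100.0]   # hypothetical concentrations
viab = [95.0, 80.0, 30.0, 5.0]   # hypothetical % viability
value = ic50(conc, viab)
```

Fitting a four-parameter logistic curve would be the more rigorous choice; log-linear interpolation is the quick approximation that suffices when the doses bracket the 50% response closely.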

15.
A speech enhancement scheme is presented using diverse processing in sub-bands spaced according to a human-cochlear describing function. The binaural adaptive scheme decomposes the wide-band input signals into a number of band-limited signals, superficially similar to the treatment the human ears perform on incoming signals. The results of a series of intelligibility and formal listening tests are presented in which acoustic speech signals corrupted with recorded automobile noise were presented to 15 normal hearing volunteer subjects. For the experimental cases considered, the proposed binaural adaptive sub-band processing scheme delivers a statistically significant improvement in terms of both speech-intelligibility and perceived quality when compared with both the conventional wide-band processed and the noisy unprocessed case. The scheme is capable of extension to a potentially more flexible sub-band processing method based on a constrained artificial neural network (ANN).
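The principle of sub-band processing — split the signal into bands, apply a different gain to each, and recombine — can be caricatured with just two bands. The filter, gains, and test signal below are all hypothetical simplifications; the paper's cochlear-spaced filterbank and adaptive binaural processing are far richer.

```python
import math

def lowpass(x, k=5):
    """Causal moving-average low-pass filter (a crude band split)."""
    return [sum(x[max(0, i - k + 1):i + 1]) / (i - max(0, i - k + 1) + 1)
            for i in range(len(x))]

def two_band_process(x, low_gain=1.0, high_gain=0.3):
    """Split into a low band and its residual high band, apply per-band
    gains, and recombine: a two-band caricature of sub-band processing."""
    low = lowpass(x)
    high = [xi - li for xi, li in zip(x, low)]
    return [low_gain * l + high_gain * h for l, h in zip(low, high)]

# Hypothetical signal: slow sinusoid ("speech") plus alternating "noise"
x = [math.sin(0.1 * i) + 0.5 * (-1) ** i for i in range(200)]
y = two_band_process(x)
```

With unit gains the two bands reconstruct the input exactly; attenuating the noisy band smooths the output, which is the effect an adaptive scheme would achieve per band using a noise estimate.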

16.
An algorithm for representing the mechanisms of 90Sr behaviour in forest ecosystems by simulation modelling is developed. Its distinctive features are: the 90Sr content of vegetation is subdivided into external and internal contamination, whose dynamics are treated separately; radionuclide dynamics are considered in connection with the dynamics of organic matter; 90Sr is assumed to behave in plants similarly to Ca; and the biological availability of 90Sr in soil is an integrated function of time and of the physico-chemical properties of the given soil. On the basis of this algorithm, a model was constructed and used for a number of numerical experiments, including reconstruction of the contamination of a forest ecosystem on grey forest soils resulting from the Kyshtym accident. Quantitative estimates of the intensity of 90Sr redistribution between stand components and soil were obtained. Current problems in building predictive models of 90Sr dynamics in forest ecosystems are discussed.
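The core of such a model is a set of coupled compartments exchanging radionuclide while both decay. The two-compartment sketch below (soil and vegetation, with hypothetical uptake and litterfall rates) illustrates the bookkeeping; the paper's model distinguishes more compartments, external versus internal contamination, and time-dependent availability.

```python
import math

HALF_LIFE = 28.8                     # years, physical half-life of 90Sr
LAMBDA = math.log(2) / HALF_LIFE     # radioactive decay constant

def simulate(soil0=100.0, veg0=0.0, uptake=0.05, litterfall=0.30,
             years=50, dt=0.1):
    """Explicit-Euler integration of soil <-> vegetation 90Sr exchange
    with radioactive decay; rates (per year) are hypothetical."""
    soil, veg = soil0, veg0
    for _ in range(int(years / dt)):
        up = uptake * soil           # root uptake, soil -> vegetation
        down = litterfall * veg      # litterfall, vegetation -> soil
        soil += dt * (down - up - LAMBDA * soil)
        veg += dt * (up - down - LAMBDA * veg)
    return soil, veg

soil, veg = simulate()
```

Because the exchange terms cancel between compartments, the total inventory decays at exactly the radioactive rate, while the soil/vegetation split relaxes towards the ratio of the exchange rates — the kind of quasi-equilibrium redistribution the paper quantifies.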

17.
We revisit the assumptions associated with the derivation and application of species sensitivity distributions (SSDs). Our questions are (1) Do SSDs clarify or obscure the setting of ecological effects thresholds for risk assessment? and (2) Do SSDs reduce or introduce uncertainty into risk assessment? Our conclusions are that if we could determine a community sensitivity distribution, this would provide a better estimate of an ecologically relevant effects threshold and therefore be an improvement for risk assessment. However, the distributions generated are typically based on haphazard collections of species and endpoints and by adjusting these to reflect more realistic trophic structures we show that effects thresholds can be shifted but in a direction and to an extent that is not predictable. Despite claims that the SSD approach uses all available data to assess effects, we demonstrate that in certain frequently used applications only a small fraction of the species going into the SSD determine the effects threshold. If the SSD approach is to lead to better risk assessments, improvements are needed in how the theory is put into practice. This requires careful definition of the risk assessment targets and of the species and endpoints selected for use in generating SSDs.
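A frequently used application of the SSD approach is fitting a log-normal distribution to species toxicity endpoints and taking its 5th percentile (the HC5) as the effects threshold. The sketch below shows that calculation with hypothetical EC50 values; it also makes the paper's point tangible, since the threshold is driven almost entirely by the most sensitive species in the collection.

```python
import math
from statistics import NormalDist, mean, stdev

def hc5(ec50s):
    """Fit a log-normal SSD to species EC50s and return the HC5,
    the concentration estimated to affect 5% of species."""
    logs = [math.log10(c) for c in ec50s]
    return 10 ** (mean(logs) + NormalDist().inv_cdf(0.05) * stdev(logs))

# Hypothetical EC50s (mg/L) for eight species of mixed trophic levels
ec50s = [1.2, 3.5, 8.0, 15.0, 40.0, 110.0, 6.3, 22.0]
threshold = hc5(ec50s)
```

Re-running `hc5` after dropping or adding one sensitive species shifts the threshold noticeably, which is the unpredictability under haphazard species selection that the paper criticizes.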

18.
The behavior of two individuals, consisting of effort which results in output, is considered to be determined by a satisfaction function which depends on remuneration (receiving part of the output) and on the effort expended. The total output of the two individuals is not additive, that is, together they produce in general more than separately. Each individual behaves in a way which he considers will maximize his satisfaction function. Conditions are deduced for a certain relative equilibrium and for the stability of this equilibrium, i.e., conditions under which it will not “pay” the individual to decrease his efforts. In the absence of such conditions “exploitation” occurs which may or may not lead to total parasitism. Some forms of the inverse problem are considered, where the form of behavior is given and forms of the satisfaction function are deduced which lead to it.
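A toy instantiation makes the equilibrium concrete. Every functional form below is an assumption for illustration only: super-additive output O = e1 + e2 + k·e1·e2, equal sharing of output, and a quadratic effort cost, giving satisfaction S = O/2 - c·e². Each individual repeatedly plays the best response to the other's effort.

```python
def best_response(e_other, k=0.5, c=0.5):
    """Maximize S = 0.5 * (e + e_other + k * e * e_other) - c * e**2 over e:
    setting dS/de = 0.5 * (1 + k * e_other) - 2 * c * e = 0."""
    return 0.5 * (1 + k * e_other) / (2 * c)

# Iterate simultaneous best responses from zero effort
e1 = e2 = 0.0
for _ in range(100):
    e1, e2 = best_response(e2), best_response(e1)
```

For these forms the symmetric equilibrium solves e = (1 + k·e)/(4c), i.e. e* = 1/(4c - k) = 2/3 here, and the iteration converges to it because the best response is a contraction (slope k/(4c) < 1) — a stability condition of exactly the kind the paper deduces in general.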

19.
For decades, molecular clocks have helped to illuminate the evolutionary timescale of life, but now genomic data pose a challenge for time estimation methods. It is unclear how to integrate data from many genes, each potentially evolving under a different model of substitution and at a different rate. Current methods can be grouped by the way the data are handled (genes considered separately or combined into a 'supergene') and the way gene-specific rate models are applied (global versus local clock). There are advantages and disadvantages to each of these approaches, and the optimal method has not yet emerged. Fortunately, time estimates inferred using many genes or proteins have greater precision and appear to be robust to different approaches.

20.
This paper proposes a method for predicting the performance of multiple-cross hybrids on the basis of single-cross information, taking into account the specific interaction of the genotypes with the environment. In the prediction model the genetical constants are those used for combining ability analysis, while genotype-environment interaction terms are defined as linear regressions of the genotypical effects on environmental variables. The model was tested by considering the variations arising from the effects of population density; the method was therefore applied in a situation in which the problem was to select the best hybrid-population density combinations. The results obtained show that the model is suitable for representing the phenotypical response across densities. However, the material used was not the most suitable for emphasizing the improvement in the predictive power of the function when genotype-environment parameters are considered. This work was supported by a grant from the Consiglio Nazionale delle Ricerche (70.10298.06.115.140).
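The paper's model, with its genotype-environment regression terms, is not reproducible from the abstract, but the classical backbone of predicting a multiple cross from single-cross data can be sketched: Jenkins' method B predicts the double cross (a × b) × (c × d) as the mean of the four non-parental single crosses. The yields below are hypothetical.

```python
from itertools import combinations

def predict_double_cross(sc, a, b, c, d):
    """Jenkins' method B: predict the double cross (a x b) x (c x d) as
    the mean of the four non-parental single crosses."""
    pairs = [(a, c), (a, d), (b, c), (b, d)]
    return sum(sc[frozenset(p)] for p in pairs) / 4

# Hypothetical single-cross yields (t/ha) among four inbred lines A-D,
# in the order AB, AC, AD, BC, BD, CD
sc = {frozenset(p): y for p, y in zip(
    combinations("ABCD", 2), [6.1, 5.8, 6.4, 7.0, 6.6, 5.9])}
pred = predict_double_cross(sc, "A", "B", "C", "D")
```

The paper's extension would replace each constant single-cross value with a regression on an environmental variable (here, population density), so that the predicted double-cross performance varies across environments.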
